10 research outputs found
A tensor network study of the complete ground state phase diagram of the spin-1 bilinear-biquadratic Heisenberg model on the square lattice
Using infinite projected entangled pair states, we study the ground state
phase diagram of the spin-1 bilinear-biquadratic Heisenberg model on the square
lattice directly in the thermodynamic limit. We find an unexpected partially
nematic partially magnetic phase in between the antiferroquadrupolar and
ferromagnetic regions. Furthermore, we describe all observed phases and discuss
the nature of the phase transitions involved.
Comment: 27 pages, 15 figures; v3: adjusted sections 1 and 3, and added a paragraph to section 5.2.
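For reference, the spin-1 bilinear-biquadratic Heisenberg model studied throughout these works is conventionally parametrized by a single angle theta that interpolates between the bilinear and biquadratic couplings (the standard form, supplied here for orientation; it is not quoted from the abstracts):

```latex
H(\theta) = \sum_{\langle i,j \rangle}
  \left[ \cos\theta \, \bigl(\mathbf{S}_i \cdot \mathbf{S}_j\bigr)
       + \sin\theta \, \bigl(\mathbf{S}_i \cdot \mathbf{S}_j\bigr)^{2} \right]
```

where the sum runs over nearest-neighbor pairs of spin-1 operators; the ferromagnetic, magnetically ordered, and quadrupolar phases discussed in these abstracts correspond to different ranges of theta.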
Emergent Haldane phase in the bilinear-biquadratic Heisenberg model on the square lattice
Infinite projected entangled pair states simulations of the
bilinear-biquadratic Heisenberg model on the square lattice reveal an emergent
Haldane phase in between the previously predicted antiferromagnetic and
3-sublattice 120 degree magnetically ordered phases. This intermediate phase
preserves SU(2) spin and translational symmetry but breaks lattice rotational
symmetry, and it can be adiabatically connected to the Haldane phase of
decoupled chains. Our results contradict previous studies, which found a
direct transition between the two magnetically ordered states.
Comment: 5 pages, 4 figures, plus supplemental material.
A ground state study of the spin-1 bilinear-biquadratic Heisenberg model on the triangular lattice using tensor networks
Making use of infinite projected entangled pair states, we investigate the
ground state phase diagram of the nearest-neighbor spin-1 bilinear-biquadratic
Heisenberg model on the triangular lattice. In agreement with previous studies,
we find the ferromagnetic, 120 degree magnetically ordered, ferroquadrupolar
and antiferroquadrupolar phases, and confirm that all corresponding phase
transitions are first order. Moreover, we provide an accurate estimate of the
location of the ferroquadrupolar to 120 degree magnetically ordered phase
transition, thereby fully establishing the phase diagram. Also, we do not
encounter any signs of the existence of a quantum paramagnetic phase. In
particular, contrary to the equivalent square lattice model, we demonstrate
that on the triangular lattice the one-dimensional Haldane phase does not reach
all the way up to the two-dimensional limit. Finally, we investigate the
possibility of an intermediate partially-magnetic partially-quadrupolar phase,
and we show that, also contrary to the square lattice case, this phase is not
present on the triangular lattice.
Comment: 14 pages, 15 figures; v2: shortened section II.B and added a paragraph to section IV.
Quantum Motif Clustering
We present three quantum algorithms for clustering graphs based on
higher-order patterns, known as motif clustering. One uses a straightforward
application of Grover search, the other two make use of quantum approximate
counting, and all of them obtain square-root-like speedups over the fastest
classical algorithms in various settings. In order to use approximate counting
in the context of clustering, we show that for general weighted graphs the
performance of spectral clustering is mostly left unchanged by the presence of
constant (relative) errors on the edge weights. Finally, we extend the original
analysis of motif clustering in order to better understand the role of multiple
`anchor nodes' in motifs and the types of relationships that this method of
clustering can and cannot capture.
Comment: 51 pages, 11 figures.
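The robustness claim above, that spectral clustering is largely unaffected by constant relative errors on the edge weights, can be illustrated numerically. The following is a minimal sketch, not the paper's analysis: the two-block test graph, the 10% noise level, and the sign-based bisection are all assumptions chosen for illustration.

```python
import numpy as np

def spectral_partition(W):
    """Bisect a weighted graph by the sign of the Fiedler vector,
    i.e. the eigenvector of the second-smallest Laplacian eigenvalue."""
    d = W.sum(axis=1)
    L = np.diag(d) - W
    vals, vecs = np.linalg.eigh(L)   # eigenvalues in ascending order
    fiedler = vecs[:, 1]
    return fiedler >= 0

rng = np.random.default_rng(0)

# Two dense 10-node blocks joined by a single weak bridge edge.
n = 10
W = np.zeros((2 * n, 2 * n))
W[:n, :n] = 1.0
W[n:, n:] = 1.0
np.fill_diagonal(W, 0.0)
W[0, n] = W[n, 0] = 0.1

# Perturb every existing edge weight by a constant relative error (10%).
eps = 0.1
noise = rng.uniform(-eps, eps, size=W.shape)
noise = (noise + noise.T) / 2        # keep the graph symmetric
W_noisy = W * (1.0 + noise)          # zero entries stay zero

clean = spectral_partition(W)
noisy = spectral_partition(W_noisy)

# The two partitions should agree, up to a global sign flip of the
# eigenvector, because the perturbation is small relative to the
# spectral gap between the two blocks.
agree = np.all(clean == noisy) or np.all(clean == ~noisy)
print(agree)
```

On well-separated clusters the perturbed Laplacian's Fiedler vector keeps the same sign pattern, so the partition survives the noise; this is the behavior the paper quantifies for general weighted graphs.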
Quantifying Grover speed-ups beyond asymptotic analysis
The usual method for studying run-times of quantum algorithms is via an asymptotic, worst-case analysis. Whilst useful, such an analysis can often fall short: it is not uncommon for algorithms with a large worst-case run-time to end up performing well on instances of practical interest. To remedy this, it is necessary to resort to run-time analyses of a more empirical nature, which for sufficiently small input sizes can be performed on a quantum device or a simulation thereof. For larger input sizes, alternative approaches are required.
In this paper we consider an approach that combines classical emulation with rigorous complexity bounds: simulating quantum algorithms by running classical versions of the sub-routines, whilst simultaneously collecting information about what the run-time of the quantum routine would have been if it were run instead. To do this accurately and efficiently for very large input sizes, we describe an estimation procedure that provides provable guarantees on the estimates that it obtains. A nice feature of this approach is that it allows one to compare the performance of quantum and classical algorithms on particular inputs of interest, rather than only on those that allow for an easier mathematical analysis.
We apply our method to some simple quantum speedups of classical heuristic algorithms for solving the well-studied MAX-k-SAT optimization problem. To do this we first obtain some rigorous bounds (including all constants) on the expected- and worst-case complexities of two important quantum sub-routines, which improve upon existing results and might be of broader interest: Grover search with an unknown number of marked items, and quantum maximum-finding. Our results suggest that such an approach can provide insightful and meaningful information, in particular when the speedup is of a small polynomial nature.
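The emulation strategy described above, running the classical version of a sub-routine while tallying what the quantum routine would have cost, can be sketched as follows. This is an illustrative sketch only: the search instance is hypothetical, and the tally uses the textbook (pi/4)*sqrt(N/t) expected-iteration estimate for Grover search with t marked items rather than the paper's rigorous bounds.

```python
import math
import random

def classical_search_with_grover_tally(is_marked, n_items, rng):
    """Classically find a marked item by uniform sampling, while
    estimating how many Grover iterations a search with an unknown
    number of marked items would have needed on the same instance.
    A sketch of the emulation idea, not the paper's bookkeeping."""
    # Classical part: sample until a marked item is found.
    classical_queries = 0
    while True:
        x = rng.randrange(n_items)
        classical_queries += 1
        if is_marked(x):
            break
    # Quantum tally: count the marked items by a full scan (affordable
    # only because this is an offline emulation) and apply the standard
    # (pi/4) * sqrt(N / t) expected-iteration estimate.
    t = sum(is_marked(y) for y in range(n_items))
    grover_estimate = (math.pi / 4) * math.sqrt(n_items / t)
    return x, classical_queries, grover_estimate

rng = random.Random(42)
N = 1 << 16
marked = {7, 1234, 9999}   # hypothetical instance for illustration
x, cq, gq = classical_search_with_grover_tally(marked.__contains__, N, rng)
print(x in marked, cq, round(gq, 1))
```

The classical run finds a marked item after roughly N/t samples in expectation, while the tallied quantum estimate scales as sqrt(N/t), which is the kind of per-instance comparison the method is designed to expose.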
Quantum algorithms for community detection and their empirical run-times
We apply our recent work on empirical estimates of quantum speedups to the practical task of community detection in complex networks. We design several quantum variants of a popular classical algorithm -- the Louvain algorithm for community detection -- and first study their complexities in the usual way, before analysing their complexities empirically across a variety of artificial and real inputs. We find that this analysis yields insights not available to us via the asymptotic analysis, further emphasising the utility of such an empirical approach. In particular, we observe that a complicated quantum algorithm with a large asymptotic speedup might not be the fastest algorithm in practice, and that a simple quantum algorithm with a modest speedup might in fact be the one that performs best. Moreover, we repeatedly find that overheads such as those arising from the need to amplify the success probabilities of quantum sub-routines such as Grover search can nullify any speedup that might have been suggested by a theoretical worst- or expected-case analysis.
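For context, the classical Louvain algorithm greedily moves nodes between communities to increase Newman modularity. A minimal sketch of that objective on a toy graph (illustrative only; the paper's quantum variants are not reproduced here):

```python
from collections import defaultdict

def modularity(edges, community):
    """Newman modularity Q of a partition of an undirected, unweighted
    graph: the fraction of intra-community edges minus the fraction
    expected if edges were rewired at random preserving degrees.
    This is the quantity the Louvain algorithm greedily maximizes."""
    m = len(edges)
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    intra = sum(1 for u, v in edges if community[u] == community[v])
    # Expected intra fraction: sum over communities of (d_c / 2m)^2.
    comm_deg = defaultdict(int)
    for node, d in deg.items():
        comm_deg[community[node]] += d
    expected = sum((d / (2 * m)) ** 2 for d in comm_deg.values())
    return intra / m - expected

# Two triangles joined by one bridge edge: the natural two-community split.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
good = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}   # triangle vs. triangle
bad = {n: n % 2 for n in range(6)}            # arbitrary split
print(round(modularity(edges, good), 3), round(modularity(edges, bad), 3))
```

The natural two-triangle split scores strictly higher than the arbitrary partition, which is the signal that Louvain's local node moves climb.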